Conversation

lukesmmr (Member) commented Feb 8, 2026

Thread source, external_id, and source_url through the entire stack (model, DTOs, validation, service, REST routes, GraphQL, protobuf, OpenAPI) so the mobile app can distinguish official government data from user-generated reports. Add server-side status and source filtering to the /near endpoint, surface the already-computed distance_km in responses, and include a safe ALTER TABLE migration for existing databases. Seed data updated with realistic source values.
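
For illustration, a client call against the extended /near endpoint might look like this (a sketch only: the mount path and exact parameter names are assumptions based on this description, not confirmed routes):

// Hypothetical request exercising the new status/source filters.
const res = await fetch(
  '/disasters/near?lat=48.21&lng=16.37&distance=50&status=active&source=official',
);
const nearby = await res.json();
// Each returned disaster should now carry source, external_id,
// source_url, and the already-computed distance_km.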

Summary by CodeRabbit

Release Notes

  • New Features
    • Disasters now track data source information with source, external ID, and source URL fields
    • Distance in kilometers is now returned for nearby disaster searches
    • Added filtering by source and status in disaster queries and API endpoints


Co-authored-by: Cursor <cursoragent@cursor.com>

coderabbitai bot commented Feb 8, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

The changes extend the disaster management system with source tracking capabilities and metadata fields (source, externalId, sourceUrl, distanceKm). New SQL migrations initialize these fields, DTOs map raw database rows to typed objects, and GraphQL/REST APIs gain filtering support for source and status parameters. Seed data is augmented with source attributes.

Changes

• Core Model & Schema (disaster.model.ts, prisma/schema.prisma, proto/disaster.proto)
  Added fields to Disaster: source (default 'official'), externalId/external_id, sourceUrl/source_url, and distanceKm/distance_km. Introduced new SQL helpers for safe column addition and indexing on source and (source, externalId) uniqueness.
• Data Transfer Objects (dto/disaster.dto.ts, dto/disaster.dto.test.ts)
  Extended DisasterInput/InputDTO with source and source metadata fields. Enhanced DisasterResponseDTO to support raw database rows and snake_case field mapping (e.g., external_id → externalId; see the sketch after this list). Added comprehensive mapping and default handling tests.
• API Schema & Validation (graphql/schema.ts, validation/disaster.ts, openapi.json)
  Added new fields to Disaster schema and DisasterInput type. Extended validation schemas with source filtering on the disasters query and nearQuerySchema to accept status and source filters. Updated the OpenAPI spec to reflect schema changes and new query parameters.
• Service Layer (services/disaster.service.ts)
  Introduced a BASE_SELECT_COLUMNS constant for field standardization. Enhanced filtering across getAllDisasters, countDisasters, and bulkInsertDisasters to support source. Refactored the findDisastersNear signature to accept a params object with status and source filters; added distance_km calculation and sorting.
• GraphQL & REST Routes (graphql/resolvers.ts, routes/disasters.ts, routes/disasters.protobuf.test.ts)
  Added a source filter to the disasters query. Enhanced the disastersNear resolver to accept and validate status and source parameters. Updated protobuf serialization to include the new fields (source, external_id, source_url, distance_km).
• Application Setup & Seed Data (app.ts, scripts/seed.ts)
  Imported and executed the new SQL migrations for source columns and indexes during app initialization. Expanded seed records with source, external_id, and source_url metadata across disaster entries.
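
As referenced in the Data Transfer Objects entry above, here is a minimal sketch of the snake_case-to-camelCase response mapping (field names and defaults are taken from this summary and the review comments below; the real DisasterResponseDTO constructor may differ):

// Hypothetical shape of the raw-row → DTO mapping.
class DisasterResponseDTO {
  source: string;
  externalId: string | null;
  sourceUrl: string | null;
  distanceKm?: number;

  constructor(raw: Record<string, unknown>) {
    this.source = (raw.source as string) || 'official';           // default when missing
    this.externalId = (raw.external_id as string | null) ?? null; // external_id → externalId
    this.sourceUrl = (raw.source_url as string | null) ?? null;   // source_url → sourceUrl
    if (raw.distance_km !== undefined && raw.distance_km !== null) {
      this.distanceKm = Number(raw.distance_km);                  // only set when computed
    }
  }
}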

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Resolver as GraphQL Resolver
    participant Validator as Validation
    participant Service as Disaster Service
    participant DB as Database
    participant Calc as Distance Calc

    Client->>Resolver: Query disastersNear(lat, lng, distance, status?, source?)
    Resolver->>Validator: Validate filters (lat, lng, distance, status?, source?)
    Validator-->>Resolver: ✓ Valid params
    Resolver->>Service: findDisastersNear({lat, lng, distance, status?, source?})
    Service->>DB: SELECT * with dynamic WHERE<br/>(location filter + status + source)
    DB-->>Service: Rows with coordinates
    Service->>Calc: Calculate distance_km for each row
    Calc-->>Service: Rows with distance_km
    Service-->>Resolver: Sorted by distance ↑
    Resolver-->>Client: DisasterResponse[] (with source, externalId, sourceUrl, distanceKm)
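
Expressed as a client query, the flow above might look like this (illustrative coordinates; the operation and field names come from this PR, the rest is assumed):

// Hypothetical GraphQL query matching the sequence above.
const query = `
  query {
    disastersNear(lat: 48.21, lng: 16.37, distance: 50, status: "active", source: "official") {
      id
      source
      externalId
      sourceUrl
      distanceKm
    }
  }
`;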

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 With whiskers aquiver, I track sources so keen,
From here and from there—oh, what metadata's seen!
Each disaster now knows its origin true,
Distance and filters help sort them anew,
A hop through the data, so orderly and bright! ✨🌍

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title clearly and specifically describes the main change, adding source tracking fields to distinguish data origin, which aligns with the core changeset across all modified files.


lukesmmr (Member, Author) commented Feb 8, 2026

@coderabbitai review

coderabbitai bot commented Feb 8, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
openapi.json (1)

269-273: ⚠️ Potential issue | 🟡 Minor

Stale _id reference in bulk update schema.

Line 272 references "required": ["_id"], but the system uses id (UUID) post-migration. The bulk update route in routes/disasters.ts validates item.id, not item._id. This predates this PR, but since you're updating surrounding schema, it's worth fixing.

Suggested fix
                 "items": {
                   "allOf": [
                     { "$ref": "#/components/schemas/DisasterInput" },
-                    { "required": ["_id"] }
+                    { "required": ["id"] }
                   ]
                 },
services/disaster.service.ts (1)

138-172: ⚠️ Potential issue | 🔴 Critical

GraphQL resolvers pass camelCase keys directly to updateDisaster, causing invalid SQL column names.

The updateDisaster function uses object keys as SQL column names at line 154. While REST routes safely use DisasterInputDTO (which has snake_case properties), both GraphQL resolvers (updateDisaster and bulkUpdateDisasters in graphql/resolvers.ts) pass the GraphQL input directly without transformation. Since the GraphQL schema defines externalId and sourceUrl in camelCase, these become invalid SQL column names like SET externalId = $N instead of SET external_id = $N.

Transform GraphQL inputs to snake_case before passing to the service layer in both resolvers.

🧹 Nitpick comments (6)
disaster.model.ts (1)

39-54: Migration SQL is safe and idempotent — minor schema qualification gap.

The DO $$ block with IF NOT EXISTS checks is a solid approach for safe, repeatable migrations. One subtlety: the information_schema.columns query doesn't filter by table_schema, so if multiple schemas contain a disasters table, it could match unintended ones:

-- Current (matches any schema):
WHERE table_name = 'disasters' AND column_name = 'source'

-- Safer:
WHERE table_schema = 'public' AND table_name = 'disasters' AND column_name = 'source'

Low risk in practice if only one schema is in use, but worth hardening.
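
For reference, a minimal sketch of the guarded-column pattern with the schema qualifier added (table and column names come from this PR; the helper name and execution mechanism are assumptions):

// Hypothetical idempotent migration string, as might be executed during app
// initialization (e.g. via prisma.$executeRawUnsafe); the ALTER runs at most once.
const addSourceColumnSql = `
DO $$
BEGIN
  IF NOT EXISTS (
    SELECT 1 FROM information_schema.columns
    WHERE table_schema = 'public'
      AND table_name = 'disasters'
      AND column_name = 'source'
  ) THEN
    ALTER TABLE disasters ADD COLUMN source TEXT NOT NULL DEFAULT 'official';
  END IF;
END $$;
`;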

routes/disasters.protobuf.test.ts (1)

47-73: Explicit source in the test payload is good practice.

The test correctly includes the new field. You could optionally strengthen the assertion to also verify source in the decoded response:

       const decoded = disastersPb.disasters.Disaster.decode(asBuffer(res));
       expect(decoded).toHaveProperty('type', 'fire');
+      expect(decoded).toHaveProperty('source', 'official');

Not required since this test is focused on protobuf content negotiation, but it would improve coverage of the new field end-to-end.

dto/disaster.dto.ts (1)

100-116: DTO mapping logic is solid — one subtle || vs ?? note.

Line 110: raw.source || 'official' uses ||, which means an empty string "" would also fall back to 'official'. This is likely intentional (empty string isn't a valid source), but it's inconsistent with the ?? operator used on lines 111-112 for externalId/sourceUrl. If source should only default when null/undefined (not ""), switch to ??:

-    this.source = raw.source || 'official';
+    this.source = raw.source ?? 'official';

That said, if the validation layer already prevents empty-string sources from reaching here, the current behavior is fine. The distanceKm coercion logic (lines 113-116) is well-handled — guarding against undefined/null before Number() avoids NaN or 0 leaking into responses.
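
The difference in one line (plain JavaScript/TypeScript semantics, independent of this codebase):

const a = '' || 'official'; // 'official': || falls back on any falsy value, including ''
const b = '' ?? 'official'; // '': ?? falls back only on null or undefined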

proto/disaster.proto (1)

14-17: Use optional keyword for nullable scalar fields in proto3.

In proto3, scalar fields cannot distinguish "not set" from their default value (empty string "" for strings, 0.0 for double). Since external_id, source_url, and distance_km are nullable in the model and DTO layers, proto3's default behavior creates ambiguity: distance_km = 0.0 cannot be distinguished from "not computed", and "" cannot be distinguished from genuinely absent for the string fields.

Proto3 supports the optional keyword (protoc v3.15+), which enables explicit presence tracking via has_* methods:

Suggested fix
   string source = 9;
-  string external_id = 10;
-  string source_url = 11;
-  double distance_km = 12;
+  optional string external_id = 10;
+  optional string source_url = 11;
+  optional double distance_km = 12;
graphql/schema.ts (1)

23-26: Consider making source non-nullable in the GraphQL Disaster type.

The database column is NOT NULL DEFAULT 'official' and the DTO defaults to 'official'. Declaring source: String (nullable) in GraphQL is inconsistent. Using source: String! would be more accurate and give clients a stronger contract.

Suggested fix
-    source: String
+    source: String!
     externalId: String
     sourceUrl: String
     distanceKm: Float
services/disaster.service.ts (1)

233-234: SQL concatenation in ST_DWithin / ST_GeomFromText — parameterized but unusual pattern.

Using 'POINT(' || $1 || ' ' || $2 || ')' is safe since $1 and $2 are parameterized values. However, using ST_GeomFromGeoJSON (as done in createDisaster) or ST_MakePoint($1, $2) would be more conventional and avoid the string concatenation inside SQL.
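
A sketch of the ST_MakePoint variant (hypothetical; assumes the existing $1 = lng, $2 = lat parameter order, matching PostGIS ST_MakePoint(x, y)):

// Equivalent distance predicate without string concatenation inside the SQL.
const nearPredicate = `
  ST_DWithin(location, ST_MakePoint($1, $2)::geography, $3 * 1000)
`;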

Comment on lines +40 to +42
source: String
externalId: String
sourceUrl: String

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (repository: goodthingsdev/disasters-api):

# Find the resolver files and examine them
find . -type f -name "*.ts" | grep -E "(resolver|graphql)" | head -20

# Also look at the DTO file structure
find . -type f -path "*/dto/*" -name "*.ts" | head -20

# Search for DisasterInput type definition and its usage in resolvers
rg "DisasterInput" --type ts -A 5 -B 2

cat -n dto/disaster.dto.ts

cat -n graphql/resolvers.ts | head -100

cat -n graphql/resolvers.ts | sed -n '100,150p'

# Look for the createDisaster, updateDisaster, bulkInsertDisasters mutations
rg "createDisaster|updateDisaster|bulkInsertDisasters|bulkUpdateDisasters" graphql/resolvers.ts -A 15

# Check how the service layer destructures the input
rg "external_id|source_url" services/disaster.service.ts -B 2 -A 2 | head -50

# Verify the DisasterInput type is being used directly (without DTO wrapper) in resolvers
# and confirm there's no transformation happening
rg "DisasterInput\|DisasterInputDTO" graphql/resolvers.ts

# Double-check: does DisasterInputDTO do any field remapping?
cat -n dto/disaster.dto.ts | sed -n '28,46p'


GraphQL input fields will always be null in the database due to camelCase/snake_case mismatch.

The DisasterInput GraphQL type uses camelCase (externalId, sourceUrl), but the DisasterInput interface in dto/disaster.dto.ts expects snake_case (external_id, source_url). The GraphQL resolvers pass input directly to the service layer without wrapping it in DisasterInputDTO, causing destructuring to fail:

const { external_id, source_url } = input; // input only has externalId/sourceUrl, so both are undefined

This affects all GraphQL mutations (createDisaster, updateDisaster, bulkInsertDisasters, bulkUpdateDisasters). REST endpoints correctly wrap input in DisasterInputDTO, but GraphQL does not, creating a critical discrepancy that silently loses data.
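
A minimal sketch of the kind of mapper the fix calls for (the helper name and input shape are hypothetical; the full DisasterInput carries more fields than shown):

// Convert the GraphQL camelCase input to the snake_case DTO shape
// before handing it to the service layer.
function toServiceInput(input: {
  externalId?: string | null;
  sourceUrl?: string | null;
  [key: string]: unknown;
}) {
  const { externalId, sourceUrl, ...rest } = input;
  return {
    ...rest,
    external_id: externalId ?? null,
    source_url: sourceUrl ?? null,
  };
}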

🤖 Prompt for AI Agents
In `@graphql/schema.ts` around lines 40 - 42, GraphQL inputs use camelCase
(externalId, sourceUrl) but your DTO expects snake_case (external_id,
source_url), so update the GraphQL resolvers (createDisaster, updateDisaster,
bulkInsertDisasters, bulkUpdateDisasters) to map/convert the incoming
DisasterInput to the DTO shape before passing to the service — e.g. construct or
call a mapper to produce a DisasterInputDTO with keys external_id and source_url
(and all other fields translated) or add a small helper that transforms
camelCase input to snake_case and use it when building DisasterInputDTO; ensure
the resolver no longer passes the raw GraphQL input directly to services.

Comment on lines 250 to 258
  const result = (await prisma.$queryRawUnsafe(
-   `SELECT id, type, ST_AsGeoJSON(location)::json as location, date, description, status, created_at, updated_at,
+   `SELECT ${BASE_SELECT_COLUMNS},
       ST_Distance(location, ST_GeomFromText('POINT(' || $1 || ' ' || $2 || ')')::geography) / 1000 as distance_km
     FROM disasters
     WHERE ST_DWithin(location, ST_GeomFromText('POINT(' || $1 || ' ' || $2 || ')')::geography, $3 * 1000)
     ${whereClause}
     ORDER BY distance_km`,
    lng,
    lat,
    distance,
    ...values,
  )) as Disaster[];
  return result;

⚠️ Potential issue | 🟠 Major

findDisastersNear does not apply formatDisasterDate, unlike every other query function.

All other query methods (createDisaster, getAllDisasters, getDisasterById, updateDisaster, bulkInsertDisasters) normalize dates via formatDisasterDate, but findDisastersNear returns raw results. This will produce inconsistent date formats (2025-01-01T00:00:00.000Z vs 2025-01-01) between /near and other endpoints.

Suggested fix
   )) as Disaster[];
-  return result;
+  return result.map(formatDisasterDate);
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  const result = (await prisma.$queryRawUnsafe(
    `SELECT ${BASE_SELECT_COLUMNS},
       ST_Distance(location, ST_GeomFromText('POINT(' || $1 || ' ' || $2 || ')')::geography) / 1000 as distance_km
     FROM disasters
     WHERE ST_DWithin(location, ST_GeomFromText('POINT(' || $1 || ' ' || $2 || ')')::geography, $3 * 1000)
     ${whereClause}
     ORDER BY distance_km`,
    lng,
    lat,
    distance,
    ...values,
  )) as Disaster[];
- return result;
+ return result.map(formatDisasterDate);
🤖 Prompt for AI Agents
In `@services/disaster.service.ts` around lines 250 - 258, findDisastersNear
currently returns raw DB rows while other methods call formatDisasterDate;
update findDisastersNear to map the query result (the Disaster[] returned by the
prisma.$queryRawUnsafe call) through formatDisasterDate before returning so date
fields are normalized. Locate the result variable in findDisastersNear, apply
result.map(formatDisasterDate) (or equivalent) and return that mapped array,
leaving the SQL/BASE_SELECT_COLUMNS and whereClause unchanged.

@lukesmmr lukesmmr requested a review from caspear February 8, 2026 11:59